Re-analyzing the Data from Li et al. (2021), "Molecular Basis for Ligand Activation of the Human KCNQ2 Channel", Using Bayesian Methods

Project summary

Traditional methods for analyzing and reporting data uncertainty assume that the available data are plentiful and representative of the underlying population, and that any variability reflects random sampling error rather than systematic bias. Real-world constraints such as limited sampling or measurement bias often violate these assumptions, and applying traditional methods to such data can yield unrealistically narrow uncertainty estimates and overconfident conclusions. Additionally, traditional methods treat parameter estimates and measures of confidence such as the standard deviation or standard error as fixed, which limits their ability to update estimates as new data arrive.

Bayesian approaches provide an alternative view of the data: probabilities are assigned to hypotheses before the data are seen (priors) and then updated once the data are observed (posterior distributions), using Bayes' theorem:

P(A | B) = P(B | A) * P(A) / P(B)

where:

  • P(A | B) is the posterior: the probability of A after observing B

  • P(B | A) is the likelihood: the probability of observing B if A holds

  • P(A) is the prior probability of A, before seeing the data

  • P(B) is the marginal probability of B
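As a quick numeric illustration of how the theorem turns a prior into a posterior (hypothetical diagnostic-test numbers, unrelated to the channel data):

```r
# Hypothetical test: 95% sensitivity, 90% specificity, 1% prevalence.
p_B_given_A <- 0.95   # P(B | A): positive test given disease
p_A         <- 0.01   # P(A): prior probability of disease
p_B <- p_B_given_A * p_A + (1 - 0.90) * (1 - p_A)  # P(B), total probability
p_A_given_B <- p_B_given_A * p_A / p_B             # posterior, about 0.088
p_A_given_B
```

Even a strongly positive test raises the probability only to about 9% because the prior was so low; this prior-to-posterior updating is exactly what the models below do with the electrophysiology data.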

The project uses the BayesPharma package to re-analyze the data from Li et al. (2021)* with a Bayesian approach and compares the findings against those of the traditional approach. The Bayesian approach reports a wider, more honest range of uncertainty and gives a more robust estimate of the results, which is especially important given the limitations of data acquisition in neuroscience studies.

Note:

The best learning resources for the Bayesian approach are Richard McElreath's YouTube course and his book, Statistical Rethinking. *The data were provided by the authors.

Steps of the workflow & summary of each step:

In this project, the methodology follows the steps outlined in the unpublished paper by Martin et al., 'BayesPharma: Bayesian methods for pharmacology models'. (A draft called "Manuscript" is uploaded in the Git repository.)

  1. The first step involves preparing the data for analysis and conducting preliminary visualizations to gain insights about the data.
  2. G-V curve fit involves fitting voltage-dependent activation curves. Prior selection and checking, model fitting, and model checking (posterior and prior/posterior density plots) are included in this step.
  3. Dose-response curve involves modeling the dose-response relationship. Prior selection (prior-only check was skipped), model fitting, and model checking (posterior and prior/posterior density plots) are included in this step.

Index

1. Analysis pipeline and related notes

2. Side notes and general info
   a. General Q&A about the topic and the statistical approach (with ChatGPT)

3. Draft: the code chunks are deactivated with eval=FALSE. The draft contains
   a. the incomplete MuSyC synergy model code and analysis
   b. logbooks, etc.

Analysis pipeline

1.1. Data Preparation and Visualization

1.2. Data importing and cleaning

setwd('C:/Users/nilou/Dropbox (University of Michigan)/Rotations/Code')
getwd()
## [1] "C:/Users/nilou/Dropbox (University of Michigan)/Rotations/Code"
library(dplyr)
## Warning: package 'dplyr' was built under R version 4.2.2
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(tidyr)
## Warning: package 'tidyr' was built under R version 4.2.2
df1<-read.csv('fig1b.csv')

#changing column names 

colnames(df1)[3]<-"Voltage"
colnames(df1)[11:15]<-c('run1','run2','run3', 'run4','run5')

#Making a DF just for the test runs

df<-df1[c(2:17,21:36,39:54,57:72,76:91), c(3,11:15)]
df
# the row numbers follow df1, so they are not sequential; renumber them:
# first, check how many rows we have:

nrow(df) # 80 rows, so we renumber the row names 1 to 80
## [1] 80
rownames(df)<-1:80


# Adding columns "dose" and "drug"
df$dose <- 10
df$drug <- "MZTZ240"
df$dose[17:32] <- 5
df$dose[33:48] <- 3
df$dose[49:64] <- 1
df$dose[65:80] <- 0.1

# pivot the data so it can be merged with the next figure's data
# first, check whether the column types match
str(df)
## 'data.frame':    80 obs. of  8 variables:
##  $ Voltage: chr  "-90" "-80" "-70" "-60" ...
##  $ run1   : chr  "0.022528" "0.015431" "0.018977" "0.053645" ...
##  $ run2   : num  0.0481 0.0031 0.0565 0.108 0.2533 ...
##  $ run3   : num  0.0511 0.0443 0.0546 0.1927 0.5399 ...
##  $ run4   : num  -0.0163 -0.0114 0.0397 0.0531 0.2573 ...
##  $ run5   : num  -0.0133 0.0056 0.039 0.071 0.2224 ...
##  $ dose   : num  10 10 10 10 10 10 10 10 10 10 ...
##  $ drug   : chr  "MZTZ240" "MZTZ240" "MZTZ240" "MZTZ240" ...
# run1 is character, so convert it to double
df[,2]<-as.double(df[,2])
# check 
str(df)
## 'data.frame':    80 obs. of  8 variables:
##  $ Voltage: chr  "-90" "-80" "-70" "-60" ...
##  $ run1   : num  0.0225 0.0154 0.019 0.0536 0.2309 ...
##  $ run2   : num  0.0481 0.0031 0.0565 0.108 0.2533 ...
##  $ run3   : num  0.0511 0.0443 0.0546 0.1927 0.5399 ...
##  $ run4   : num  -0.0163 -0.0114 0.0397 0.0531 0.2573 ...
##  $ run5   : num  -0.0133 0.0056 0.039 0.071 0.2224 ...
##  $ dose   : num  10 10 10 10 10 10 10 10 10 10 ...
##  $ drug   : chr  "MZTZ240" "MZTZ240" "MZTZ240" "MZTZ240" ...
df<-df %>% 
  pivot_longer(c('run1','run2','run3', 'run4','run5'), names_to = "run number", values_to = "Acquisition")


#Let's add the figure 1d data 
df2<-read.csv('fig1d.csv')

#changing column names 
colnames(df2)[2]<-"Voltage"
colnames(df2)[12:17]<-c('run1','run2','run3', 'run4','run5', 'run6')
#Making a DF just for the test runs

df3<-df2[c(1:16,19:34,38:53,56:71,75:90,94:109), c(2,12:17)]
df3
# Adding columns "dose" and "drug"
df3$dose <- 30
df3$drug <- "RTG"

nrow(df3)
## [1] 96
rownames(df3)<-1:96

df3$dose[17:32]<-10
df3$dose[33:48]<-5
df3$dose[49:64]<-3
df3$dose[65:80]<-1
df3$dose[81:96]<-0.3
# the column numbers don't match df, so pivot first
# error 1: run1 was read as character rather than double, so convert it to double
df3[,2]<-as.double(df3[,2])

df3<-df3 %>% 
  pivot_longer(c('run1','run2','run3', 'run4','run5', 'run6'), names_to = "run number", values_to = "Acquisition")



# Let's merge df and df3 into DF
DF <- rbind(df, df3)
# as.numeric cannot be applied directly to a tibble (the result of pivot_longer)
# dplyr::mutate is the appropriate tool for a tibble
DF<-mutate(DF, Voltage=as.numeric(Voltage))
 str(DF)
## tibble [976 × 5] (S3: tbl_df/tbl/data.frame)
##  $ Voltage    : num [1:976] -90 -90 -90 -90 -90 -80 -80 -80 -80 -80 ...
##  $ dose       : num [1:976] 10 10 10 10 10 10 10 10 10 10 ...
##  $ drug       : chr [1:976] "MZTZ240" "MZTZ240" "MZTZ240" "MZTZ240" ...
##  $ run number : chr [1:976] "run1" "run2" "run3" "run4" ...
##  $ Acquisition: num [1:976] 0.0225 0.0481 0.0511 -0.0163 -0.0133 ...

1.3. Formatting the data frames according to the requirements of the model

library(dplyr) # alternatively, call functions with the package prefix, e.g. dplyr::mutate()
DF<-DF|>
  dplyr::mutate (
    dose=as.factor(dose),
    Voltage= as.numeric(Voltage), 
    response=as.numeric(Acquisition),
    log_dose=Voltage,
    drug_dose=dose)|> 
    dplyr::filter(!is.na(Acquisition))
DF
library(dplyr) # alternatively, call functions with the package prefix, e.g. dplyr::mutate()
df3<-df3|>
  dplyr::mutate (
    dose=as.factor(dose),
    Voltage= as.numeric(Voltage), 
    response=as.numeric(Acquisition),
    log_dose=Voltage,
    drug_dose=dose)|> # the pipe passes the result of the preceding expression to the next function
  # filter out NA rows: filter() keeps the rows that meet the criterion, i.e. rows where Acquisition is not NA
  dplyr::filter(!is.na(Acquisition))
# to check whether the function worked
df3
str(df3)
## tibble [512 × 8] (S3: tbl_df/tbl/data.frame)
##  $ Voltage    : num [1:512] -90 -90 -90 -90 -90 -90 -80 -80 -80 -80 ...
##  $ dose       : Factor w/ 6 levels "0.3","1","3",..: 6 6 6 6 6 6 6 6 6 6 ...
##  $ drug       : chr [1:512] "RTG" "RTG" "RTG" "RTG" ...
##  $ run number : chr [1:512] "run1" "run2" "run3" "run4" ...
##  $ Acquisition: num [1:512] 0.02215 0.01113 0.01276 0.00685 -0.03602 ...
##  $ response   : num [1:512] 0.02215 0.01113 0.01276 0.00685 -0.03602 ...
##  $ log_dose   : num [1:512] -90 -90 -90 -90 -90 -90 -80 -80 -80 -80 ...
##  $ drug_dose  : Factor w/ 6 levels "0.3","1","3",..: 6 6 6 6 6 6 6 6 6 6 ...
# fewer rows now (the NA rows were removed)

Also formatting df to match df3:

df<-df|>
  dplyr::mutate (
    dose=as.numeric(dose),
    Voltage= as.numeric(Voltage), 
    response=as.numeric(Acquisition))|> 
    dplyr::filter(!is.na(Acquisition)) 
df

1.4. Plotting

A general visualization of conductance vs. voltage gives an overview of the data and confirms that the importing and cleaning steps worked.

library(ggplot2)
## Warning: package 'ggplot2' was built under R version 4.2.2

1.5. Facet based on dosage

Note about the backticks:

When using ggplot2, column names that contain spaces can cause issues. For example, with a column called "run number", ggplot2 will not parse the bare name as a single column reference the way it does for names like "Acquisition" or "drug". The fix is to enclose the column name in backticks (`) when referencing it in code.

In our case, we wanted group = `run number` to draw a line through the data points of the same run, but grouping by run number alone connected points of the same run across different drugs, producing a zigzag line. Grouping by both drug and run number draws one smooth line per run while keeping the drugs separate.

p <- ggplot(DF, aes(Voltage, Acquisition, colour=drug, group=`run number`)) + geom_point() +geom_line()
# problem: grouping only by run number connects points of the same run across different drugs

p1 <- ggplot(DF, aes(Voltage, Acquisition, colour=drug, group=paste(drug, `run number`))) + geom_point() +geom_line()

p1 + facet_grid(rows = vars(dose))
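An equivalent and slightly more idiomatic way to combine the two grouping variables is interaction() instead of paste() (a sketch using the same DF columns as above):

```r
# Group by the drug-by-run combination via interaction() rather than paste()
p2 <- ggplot(DF, aes(Voltage, Acquisition, colour = drug,
                     group = interaction(drug, `run number`))) +
  geom_point() + geom_line()
p2 + facet_grid(rows = vars(dose))
```

interaction() builds a single factor from the two columns, which makes the grouping intent explicit without string concatenation.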

### fit the Bayesian model ###
library(BayesPharma)
# filter the data by drug; we start with RTG in df3
## install the newest version of the package from GitHub using
## remotes::install_github()
### working with factors in R
AB<-factor(c('a','b','c','b'))
AB
## [1] a b c b
## Levels: a b c
remotes::install_github('maomlab/BayesPharma')

library(BayesPharma)
library(brms)
## Warning: package 'brms' was built under R version 4.2.2

2. Voltage-dependent activation curves (G-V curves)

2.1. Prior selection and check:

To check whether the prior is within a reasonable range, we first examined the default priors in the sigmoid_agonist_prior function using help(). Since the formula is set up for the log dose of a drug rather than voltage, we needed to redefine the default EC50 prior to be plausible for voltage. Based on the sigmoid figures in Li et al. (2021), we centered the EC50 around -20 with a broad interval so as not to be overly restrictive. We then sampled from the prior with MCMC and plotted the distribution to inspect it visually. We also needed to redefine the init value, which sets the initial parameter values the MCMC algorithm starts from: the default of -6 (see the sigmoid_agonist_init specification) had to be reset to -20 to be suitable for voltage.
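For reference, the curve being fit is the standard four-parameter log-logistic (Hill) sigmoid; in the form commonly used for this kind of model (with log_dose holding voltage in our setup), it can be written as:

response = bottom + (top - bottom) / (1 + 10^((ec50 - log_dose) * hill))

so ec50 is the voltage of half-maximal response and hill controls the slope. (This is the generic Hill equation; consult the BayesPharma documentation for the package's exact parameterization.)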

2.1.3. Building a model just for priors

kor_sample_prior <- BayesPharma::sigmoid_agonist_model(
  data = DF |> dplyr::select(drug, log_dose, response),
  prior = BayesPharma::sigmoid_agonist_prior(
    top = 1,
    ec50 = brms::prior(normal(-20, 40), nlpar = "ec50")),
  # set the initialization to a value plausible for voltage
  init = BayesPharma::sigmoid_agonist_init(ec50 = -20),
  sample_prior = "only")
## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 9.3e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.93 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.805 seconds (Warm-up)
## Chain 1:                0.71 seconds (Sampling)
## Chain 1:                1.515 seconds (Total)
## Chain 1: 
## (Sampling output for Chains 2-4 is nearly identical to Chain 1 and is omitted.)

Taking a look at the model specs:

kor_sample_prior
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: response ~ sigmoid(ec50, hill, top, bottom, log_dose) 
##          ec50 ~ 1
##          hill ~ 1
##          top ~ 1
##          bottom ~ 1
##    Data: data (Number of observations: 912) 
##   Draws: 4 chains, each with iter = 8000; warmup = 4000; thin = 1;
##          total post-warmup draws = 16000
## 
## Population-Level Effects: 
##                  Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## ec50_Intercept     -20.06     40.16   -99.03    58.57 1.00     9467     9292
## hill_Intercept       1.29      0.80     0.08     3.06 1.00     6349     4156
## bottom_Intercept    -0.00      0.50    -0.98     0.98 1.00    10004     8443
## top_Intercept        1.00      0.00     1.00     1.00   NA       NA       NA
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     2.76      3.23     0.08    10.47 1.00     9819     6067
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
  • Interpretation:

    • Estimate: the mean of the sampled distribution for sigma is 2.76.

    • Est.Error: the standard deviation of that distribution is 3.23.

    • l-95% CI: the lower bound of the 95% credible interval for sigma is 0.08.

    • u-95% CI: the upper bound of the 95% credible interval for sigma is 10.47.

    • Rhat: the Gelman-Rubin statistic, which measures the convergence of multiple chains, is 1.00, indicating good convergence.

    • Bulk_ESS: the effective sample size (ESS) for the bulk of the distribution of sigma is 9819, indicating enough samples for reliable inference.

    • Tail_ESS: the ESS for the tails of the distribution of sigma is 6067, indicating enough samples for reliable inference about extreme values.

  • In conclusion, under this prior-only model there is a 95% chance that sigma lies between 0.08 and 10.47, with a mean estimate of 2.76. The large standard deviation (Est.Error) reflects substantial uncertainty in this estimate, which is expected given that this is a prior distribution. Overall, the Rhat values and effective sample sizes indicate good convergence and sufficient samples for reliable inference.
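To make the credible-interval reading concrete: the 95% interval is simply the central quantiles of the draws. A stand-in sketch with simulated draws (toy numbers, not the fitted model object):

```r
# Toy draws standing in for the sampled sigma values (not the real model)
set.seed(1)
draws <- abs(rnorm(16000, mean = 2.76, sd = 3.23))
quantile(draws, probs = c(0.025, 0.975))  # a 95% credible interval
mean(draws)                               # the point estimate ("Estimate")
```

With the real model, the same quantile computation on the posterior draws of sigma reproduces the l-95%/u-95% columns of the summary table.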

2.1.4 Prior distribution plots

kor_sample_prior |>
BayesPharma::density_distributions_plot()

The plot depicts the selected priors and provides a visual means of checking the numerical values before we move to the next step: the model fit.

2.2. Model fitting

2.2.1. Test run with df3 (RTG)

Model1: we first test the modelling with only the df3 data frame, which contains the RTG data. Later we use the DF data frame (Model2), which contains the data for both drugs rather than just one.

# let's look at the default priors
BayesPharma::sigmoid_agonist_prior()
Model1 <- BayesPharma::sigmoid_agonist_model(
  data = df3,
  formula = BayesPharma::sigmoid_agonist_formula(predictors = 0 + dose),
  # override the default EC50 prior so it covers plausible voltages
  # instead of log drug doses
  prior = BayesPharma::sigmoid_agonist_prior(
    ec50 = brms::prior(normal(-20, 40), nlpar = "ec50")),
  # the default init for ec50 is still at -9; reset it to -20 so the
  # chains start in a plausible voltage range
  init = BayesPharma::sigmoid_agonist_init(ec50 = -20)
)
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: 
## Chain 1: Gradient evaluation took 0.000542 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 5.42 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 90.192 seconds (Warm-up)
## Chain 1:                66.637 seconds (Sampling)
## Chain 1:                156.829 seconds (Total)
## Chain 1: 
## (Sampling output for Chains 2-4 is similar to Chain 1, including some
## initial-value rejections, and is omitted.)
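With Model1 fit, the Stan program that brms generated (and the data passed to it) can be inspected directly; a sketch, assuming the object returned by BayesPharma::sigmoid_agonist_model is a brmsfit, as it wraps brms:

```r
brms::stancode(Model1)  # print the Stan program brms generated
brms::standata(Model1)  # the data list that was passed to Stan
```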

A note about the process:

The model formula is translated into Stan code, a probabilistic programming language for Bayesian modeling; the Stan program is then compiled via C++ and inference is carried out by MCMC sampling. Next, we fit and evaluate the model for the DF data frame containing both drugs:

2.2.2. Main run with DF: Model2

Model2 Note:

Initially, we fit the model for both drugs together with the predictor term 0 + dose, but the pooled fit did not make sense. We therefore revised the predictor to 0 + dose:drug, which estimates one set of parameters per dose-by-drug combination within a single model, rather than fitting a separate model for each drug. One caveat of merging the two drugs into one model is the assumption that the resulting sigma (i.e., the Gaussian spread around each point of the fitted curve) is the same for both drugs. More details can be found in the draft section.
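To see how the 0 + dose:drug predictor is coded, model.matrix expands it into one indicator column per dose-by-drug combination (toy factors for illustration, not the real data):

```r
# Toy design: two doses crossed with the two drugs
toy <- expand.grid(dose = factor(c(1, 10)),
                   drug = factor(c("MZTZ240", "RTG")))
model.matrix(~ 0 + dose:drug, data = toy)  # one column per combination
```

Each column gets its own parameter estimate, which is how a single model yields separate curves per dose and drug.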

Model2 <- BayesPharma::sigmoid_agonist_model(
  data = DF,
  # fit one set of parameters per dose-by-drug combination
  formula = BayesPharma::sigmoid_agonist_formula(predictors = 0 + dose:drug),
  # override the default EC50 prior so it covers plausible voltages
  # instead of log drug doses
  prior = BayesPharma::sigmoid_agonist_prior(
    ec50 = brms::prior(normal(-20, 40), nlpar = "ec50")),
  # the default init for ec50 is still at -9; reset it to -20
  init = BayesPharma::sigmoid_agonist_init(ec50 = -20)
)
## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: Rejecting initial value:
## Chain 1:   Gradient evaluated at the initial value is not finite.
## Chain 1:   Stan can't start sampling from this initial value.
## Chain 1: 
## Chain 1: Gradient evaluation took 0.000503 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 5.03 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 303.986 seconds (Warm-up)
## Chain 1:                436.547 seconds (Sampling)
## Chain 1:                740.533 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2: Rejecting initial value:
## Chain 2:   Gradient evaluated at the initial value is not finite.
## Chain 2:   Stan can't start sampling from this initial value.
## Chain 2: [previous message repeated 35 more times before a finite initial value was found]
## Chain 2: 
## Chain 2: Gradient evaluation took 0.001667 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 16.67 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 2: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 2: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 2: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 2: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 2: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 2: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 2: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 2: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 2: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 2: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 2: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 380.417 seconds (Warm-up)
## Chain 2:                106.715 seconds (Sampling)
## Chain 2:                487.132 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3: Rejecting initial value:
## Chain 3:   Gradient evaluated at the initial value is not finite.
## Chain 3:   Stan can't start sampling from this initial value.
## Chain 3: [previous message repeated 10 more times before a finite initial value was found]
## Chain 3: 
## Chain 3: Gradient evaluation took 0.000409 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 4.09 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 3: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 3: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 3: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 3: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 3: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 3: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 3: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 3: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 3: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 3: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 3: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 154.762 seconds (Warm-up)
## Chain 3:                108.582 seconds (Sampling)
## Chain 3:                263.344 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4: Rejecting initial value:
## Chain 4:   Gradient evaluated at the initial value is not finite.
## Chain 4:   Stan can't start sampling from this initial value.
## Chain 4: [previous message repeated 37 more times before a finite initial value was found]
## Chain 4: 
## Chain 4: Gradient evaluation took 0.000449 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 4.49 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 4: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 4: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 4: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 4: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 4: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 4: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 4: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 4: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 4: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 4: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 4: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 193.87 seconds (Warm-up)
## Chain 4:                106.742 seconds (Sampling)
## Chain 4:                300.612 seconds (Total)
## Chain 4:

A note about the intercepts in the formula syntax: R's formula interface was designed for frequentist linear models, and brms extends it to Bayesian non-linear models. By default, every formula includes an intercept term; if you want the fit to start from zero, you remove the intercept by writing `0 +` at the start of the right-hand side. In a linear model, a predictor term such as `voltage` estimates a slope: how much Y changes for a one-unit change in X. In the non-linear model here, however, a term such as `0 + voltage` (and analogously `0 + dose:drug` in the formula below) instead estimates each of the non-linear parameters separately for each level of the predictor, with no reference level or intercept.
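As a small illustration (hypothetical two-level factor, not data from the paper), the effect of `0 +` on the design matrix can be seen directly with `model.matrix()`:

```r
# Hypothetical factor with two levels, to show what `0 +` does to the design matrix.
d <- data.frame(drug = factor(c("A", "A", "B", "B")))

# Default coding: an intercept (reference level) plus an offset for level B.
colnames(model.matrix(~ drug, d))      # "(Intercept)" "drugB"

# `0 +` coding: one column (one parameter) per factor level, no intercept.
colnames(model.matrix(~ 0 + drug, d))  # "drugA" "drugB"
```

The second coding is what lets each dose-by-drug combination get its own parameter estimate rather than an offset from a baseline.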

2.3. Model check

2.3.1. Analyzing the model fit

summary(Model2)
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: response ~ sigmoid(ec50, hill, top, bottom, log_dose) 
##          ec50 ~ 0 + dose:drug
##          hill ~ 0 + dose:drug
##          top ~ 0 + dose:drug
##          bottom ~ 0 + dose:drug
##    Data: data (Number of observations: 912) 
##   Draws: 4 chains, each with iter = 8000; warmup = 4000; thin = 1;
##          total post-warmup draws = 16000
## 
## Population-Level Effects: 
##                            Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## ec50_dose0.1:drugMZTZ240     -11.63      1.47   -14.44    -8.75 1.00    22748
## ec50_dose0.3:drugMZTZ240     -20.19     40.74  -100.33    59.56 1.00    33646
## ec50_dose1:drugMZTZ240       -19.32      1.23   -21.72   -16.93 1.00    21725
## ec50_dose3:drugMZTZ240       -32.17      1.08   -34.30   -30.09 1.00    22541
## ec50_dose5:drugMZTZ240       -32.14      0.97   -34.07   -30.23 1.00    20867
## ec50_dose10:drugMZTZ240      -43.79      0.83   -45.46   -42.15 1.00    24052
## ec50_dose30:drugMZTZ240      -20.04     39.63   -98.06    57.56 1.00    30828
## ec50_dose0.1:drugRTG         -20.12     40.04   -98.00    57.13 1.00    35040
## ec50_dose0.3:drugRTG         -18.90      1.21   -21.31   -16.52 1.00    22130
## ec50_dose1:drugRTG           -24.70      1.42   -27.50   -21.93 1.00    23060
## ec50_dose3:drugRTG           -35.56      1.28   -38.11   -33.10 1.00    20683
## ec50_dose5:drugRTG           -39.80      1.24   -42.23   -37.45 1.00    20560
## ec50_dose10:drugRTG          -45.07      1.38   -47.92   -42.48 1.00    18907
## ec50_dose30:drugRTG          -42.99      1.36   -45.77   -40.42 1.00    17346
## hill_dose0.1:drugMZTZ240       0.02      0.00     0.02     0.03 1.00    15070
## hill_dose0.3:drugMZTZ240       1.28      0.80     0.07     3.02 1.00    11379
## hill_dose1:drugMZTZ240         0.03      0.00     0.03     0.04 1.00    16089
## hill_dose3:drugMZTZ240         0.03      0.00     0.03     0.04 1.00    17057
## hill_dose5:drugMZTZ240         0.04      0.00     0.03     0.05 1.00    19280
## hill_dose10:drugMZTZ240        0.05      0.00     0.04     0.06 1.00    20580
## hill_dose30:drugMZTZ240        1.28      0.79     0.07     3.00 1.00    11397
## hill_dose0.1:drugRTG           1.29      0.78     0.09     3.01 1.00    12486
## hill_dose0.3:drugRTG           0.03      0.00     0.03     0.04 1.00    16845
## hill_dose1:drugRTG             0.03      0.00     0.02     0.03 1.00    16425
## hill_dose3:drugRTG             0.03      0.00     0.03     0.04 1.00    14568
## hill_dose5:drugRTG             0.03      0.00     0.03     0.04 1.00    15123
## hill_dose10:drugRTG            0.03      0.00     0.02     0.03 1.00    14418
## hill_dose30:drugRTG            0.03      0.00     0.02     0.03 1.00    13588
## top_dose0.1:drugMZTZ240        0.99      0.02     0.95     1.04 1.00    16125
## top_dose0.3:drugMZTZ240        1.00      0.50     0.02     2.00 1.00    34049
## top_dose1:drugMZTZ240          0.92      0.02     0.89     0.95 1.00    16956
## top_dose3:drugMZTZ240          0.96      0.01     0.94     0.99 1.00    22542
## top_dose5:drugMZTZ240          0.95      0.01     0.93     0.97 1.00    22585
## top_dose10:drugMZTZ240         0.93      0.01     0.91     0.95 1.00    24198
## top_dose30:drugMZTZ240         1.00      0.50     0.01     1.97 1.00    32432
## top_dose0.1:drugRTG            1.00      0.50     0.01     1.97 1.00    31304
## top_dose0.3:drugRTG            0.97      0.02     0.94     1.00 1.00    18569
## top_dose1:drugRTG              0.98      0.02     0.95     1.01 1.00    19190
## top_dose3:drugRTG              0.95      0.01     0.93     0.98 1.00    20260
## top_dose5:drugRTG              0.96      0.01     0.94     0.99 1.00    20656
## top_dose10:drugRTG             0.96      0.01     0.94     0.99 1.00    21702
## top_dose30:drugRTG             0.96      0.01     0.94     0.99 1.00    19716
## bottom_dose0.1:drugMZTZ240     0.01      0.02    -0.03     0.04 1.00    17446
## bottom_dose0.3:drugMZTZ240     0.00      0.50    -0.98     0.96 1.00    31800
## bottom_dose1:drugMZTZ240      -0.02      0.02    -0.05     0.02 1.00    17608
## bottom_dose3:drugMZTZ240      -0.02      0.02    -0.06     0.02 1.00    17743
## bottom_dose5:drugMZTZ240       0.00      0.02    -0.03     0.04 1.00    18529
## bottom_dose10:drugMZTZ240     -0.00      0.02    -0.04     0.03 1.00    19645
## bottom_dose30:drugMZTZ240      0.00      0.50    -0.97     0.96 1.00    33961
## bottom_dose0.1:drugRTG        -0.00      0.51    -0.99     0.98 1.00    31067
## bottom_dose0.3:drugRTG         0.02      0.02    -0.02     0.05 1.00    18486
## bottom_dose1:drugRTG           0.00      0.02    -0.04     0.05 1.00    16362
## bottom_dose3:drugRTG          -0.01      0.02    -0.06     0.03 1.00    15098
## bottom_dose5:drugRTG          -0.05      0.03    -0.10    -0.00 1.00    14269
## bottom_dose10:drugRTG         -0.04      0.03    -0.10     0.02 1.00    14745
## bottom_dose30:drugRTG         -0.07      0.03    -0.14    -0.02 1.00    13225
##                            Tail_ESS
## ec50_dose0.1:drugMZTZ240      11487
## ec50_dose0.3:drugMZTZ240      11513
## ec50_dose1:drugMZTZ240        12857
## ec50_dose3:drugMZTZ240        13392
## ec50_dose5:drugMZTZ240        12067
## ec50_dose10:drugMZTZ240       12363
## ec50_dose30:drugMZTZ240       11444
## ec50_dose0.1:drugRTG          11303
## ec50_dose0.3:drugRTG          11982
## ec50_dose1:drugRTG            11929
## ec50_dose3:drugRTG            11507
## ec50_dose5:drugRTG            11671
## ec50_dose10:drugRTG           11703
## ec50_dose30:drugRTG           10601
## hill_dose0.1:drugMZTZ240      11303
## hill_dose0.3:drugMZTZ240       6615
## hill_dose1:drugMZTZ240        12365
## hill_dose3:drugMZTZ240        12760
## hill_dose5:drugMZTZ240        13332
## hill_dose10:drugMZTZ240       14175
## hill_dose30:drugMZTZ240        6015
## hill_dose0.1:drugRTG           7396
## hill_dose0.3:drugRTG          12366
## hill_dose1:drugRTG            12469
## hill_dose3:drugRTG            11226
## hill_dose5:drugRTG            12112
## hill_dose10:drugRTG           11651
## hill_dose30:drugRTG           10437
## top_dose0.1:drugMZTZ240       11239
## top_dose0.3:drugMZTZ240       10675
## top_dose1:drugMZTZ240         12141
## top_dose3:drugMZTZ240         12178
## top_dose5:drugMZTZ240         12378
## top_dose10:drugMZTZ240        12811
## top_dose30:drugMZTZ240        11597
## top_dose0.1:drugRTG           11181
## top_dose0.3:drugRTG           12152
## top_dose1:drugRTG             12895
## top_dose3:drugRTG             12390
## top_dose5:drugRTG             12922
## top_dose10:drugRTG            12421
## top_dose30:drugRTG            12825
## bottom_dose0.1:drugMZTZ240    12176
## bottom_dose0.3:drugMZTZ240    11097
## bottom_dose1:drugMZTZ240      12223
## bottom_dose3:drugMZTZ240      12335
## bottom_dose5:drugMZTZ240      13135
## bottom_dose10:drugMZTZ240     12454
## bottom_dose30:drugMZTZ240     11967
## bottom_dose0.1:drugRTG        11116
## bottom_dose0.3:drugRTG        10908
## bottom_dose1:drugRTG          11859
## bottom_dose3:drugRTG          10226
## bottom_dose5:drugRTG          11126
## bottom_dose10:drugRTG         10874
## bottom_dose30:drugRTG          8954
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     0.06      0.00     0.06     0.06 1.00    27215    11306
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Sum <- summary(Model2)

2.3.2. Model interpretation

These four pieces of information describe the estimated distribution of the parameter sigma, a measure of the variability in the response variable that the model cannot explain. The numbers suggest the model is performing well: (1) sigma is estimated precisely, and (2) given its narrow range, there is little unexplained variability in the response. Rhat should be close to 1.00, which indicates the MCMC has converged. The Bulk_ESS and Tail_ESS values are 27215 and 11306, respectively; both are high, indicating that the chains mixed well and the samples are representative of the posterior distribution.
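A quick programmatic version of this check (a sketch assuming `Model2` is the fitted `brmsfit` object from above) pulls the worst-case diagnostics instead of reading them off the printed summary; `brms::rhat()` and `brms::neff_ratio()` are both exported by brms:

```r
# Numeric convergence diagnostics for the fitted model (Model2 from above)
rhats <- brms::rhat(Model2)        # potential scale reduction per parameter
max(rhats, na.rm = TRUE)           # should be close to 1.00

neffs <- brms::neff_ratio(Model2)  # effective sample size / total draws
min(neffs, na.rm = TRUE)           # low ratios flag poorly mixing parameters
```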

  • Using the visualizations in the next step, we aim to:
  1. Visualize the estimated values and their ranges.

  2. Provide diagnostic plots for both the prior and posterior distributions to examine their relationship. Ideally, we are looking for a plot like the one shown below:

  3. Inspect the MCMC sampling using trace plots (e.g., did it converge? Do we see autocorrelation?). Ideally, our trace plots should look like fat, hairy caterpillars :))

2.3.3. Posterior sampling check

2.3.3.1. MCMC convergence check: Trace plots

library(bayesplot)
## Warning: package 'bayesplot' was built under R version 4.2.3
## This is bayesplot version 1.10.0
## - Online documentation and vignettes at mc-stan.org/bayesplot
## - bayesplot theme set to bayesplot::theme_default()
##    * Does _not_ affect other ggplot2 plots
##    * See ?bayesplot_theme_set for details on theme setting
Trace_m2 <- Model2 |>
  bayesplot::mcmc_trace()
# let's view it
Trace_m2

The trace plots look like fat, hairy caterpillars, which means the MCMC chains converged.

2.3.3.2. Compare prior and posterior distributions

Two sets of graphs:

2.3.3.2.A. Distribution plots for prior and posterior

The posterior distribution of each parameter is displayed along with its mean and 95% credible interval. The white areas mark the credible interval and indicate where 95% of the posterior probability lies. A narrow interval means the posterior is tightly concentrated around the mean, while a wider interval indicates a more dispersed distribution.
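For intuition, the 95% interval drawn in these plots is simply the central quantile range of the posterior draws. A self-contained sketch, with simulated draws standing in for a parameter's posterior:

```r
set.seed(1)
# Stand-in for 16,000 posterior draws of a single parameter
draws <- rnorm(16000, mean = -5.8, sd = 0.2)

# Central 95% interval: the 2.5% and 97.5% quantiles of the draws
quantile(draws, probs = c(0.025, 0.975))
```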

BayesPharma::posterior_densities_plot(
  model = Model2,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "")

Notes:

1. The model formula is 0 + drug:dose, which produces parameter estimates for every possible drug/dose combination, regardless of whether data exist for that combination. For example, we lack data for 0.3 and 30 µM ZTZ and for 0.1 µM RTG, so the prior and posterior distributions and curves for these combinations are driven purely by the prior.

2. The prior-posterior graphs reveal two key insights. Firstly, the priors are wide enough to provide both informative and non-restrictive information, as previously demonstrated in the prior check graphs. Secondly, the posteriors have become narrower and taller, indicating a more condensed distribution around a particular tendency.

3. The posterior distribution highlights the 95% data interval (represented by the white zone vs. grey zone). This graph represents the level of uncertainty in the posterior distribution.

2.3.3.2.B. Posterior draws

BayesPharma::posterior_draws_plot(
model = Model2,
title = "")

Interpretation: Here, we again see the same issue in three of the graphs, which model non-existent data (see point 1 above). The other graphs show results similar to those in the paper. Since the paper only reports the numerical ec50 of each treatment, to obtain a comparable measure we need to move to the next step and fit the dose-response curves.

3. Dose-response curves

To fit the dose-response curves we need two main ingredients:

  1. Delta V1/2 (response)
  2. Drug dose (independent variable)

3.1. Data preparation

3.1.1. The vdose data frame with the V1/2 values from Model2

library(posterior)
## Warning: package 'posterior' was built under R version 4.2.2
## This is posterior version 1.4.0
## 
## Attaching package: 'posterior'
## The following object is masked from 'package:bayesplot':
## 
##     rhat
## The following object is masked from 'package:brms':
## 
##     rhat
## The following objects are masked from 'package:stats':
## 
##     mad, sd, var
## The following objects are masked from 'package:base':
## 
##     %in%, match
ds<-summarize_draws(Model2)
ds

Notes for the code chunk below

  • The doses in the package are in molar, not micromolar, so we have to convert the paper's values to molar (hence the -6 in the code below).
  • “.before = 1” in the mutate() call means the new column is placed before the Nth column. Here N = 1, so the new column becomes the first one.
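The micromolar-to-molar conversion works because log10(x * 1e-6) = log10(x) - 6, so subtracting 6 on the log scale is the same as dividing the raw dose by 10^6 first:

```r
doses_uM <- c(0.1, 0.3, 1, 3, 5, 10, 30)  # doses in micromolar, as in the paper

# Convert to molar first, then take log10 ...
log_dose_direct <- log10(doses_uM * 1e-6)

# ... or take log10 of the micromolar value and subtract 6
log_dose_shift <- log10(doses_uM) - 6

all.equal(log_dose_direct, log_dose_shift)  # TRUE
```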

Extract the relevant variables to form the “vdose” data frame

library(dplyr)

# vdose: the dose column holds log10 values; subtracting 6 converts micromolar to molar
vdose <- ds |>
  filter(variable |> stringr::str_detect("ec50")) |>
  # strip the parts of the name we do not want, convert to numeric, and take log10
  mutate(log_dose = variable |>
           stringr::str_replace("b_ec50_dose", "") |>
           stringr::str_replace(":drugMZTZ240", "") |>
           stringr::str_replace(":drugRTG", "") |>
           as.numeric() |>
           log10(), .before = 1) |>
  # convert micromolar to molar by subtracting 6 (log10(1e-6) = -6)
  mutate(log_dose = log_dose - 6) |>
  # the response (Y axis): copy the "mean" column into a numeric "response" column
  mutate(response = mean |> as.numeric(), .before = 2) |>
  mutate(dosename = variable |> stringr::str_replace("b_ec50_", ""), .before = 3) |>
  mutate(dose = c(0.1, 0.3, 1, 3, 5, 10, 30, 0.1, 0.3, 1, 3, 5, 10, 30),
         drug = c(rep("ZTZ240", 7), rep("RTG", 7))) |>
  # keep only the columns we need
  select(log_dose, response, dosename, dose, drug)
  # dplyr::mutate(response = -response) # no longer needed, see the note below
vdose

Important note about the vdose: the need to omit the non-tested rows

  • See note 1 under 2.3.3.2.A. Distribution plots for prior and posterior.

The vdose data frame is derived from the Model2 parameters, which were fit with the formula response = 0 + drug:dose. This means the model estimates a response for every dose/drug combination, even those with no actual measurements. For example, we have no measurements for 0.3 and 30 µM ZTZ or for 0.1 µM RTG, yet the corresponding summary draws still contain numeric values. This is why the corresponding graphs above look unusual, both the posterior draw plots and the prior-posterior density plots. In the next step, we remove the rows corresponding to these non-tested combinations.

vdose <- vdose[-c(2, 7, 8), ]

Note about vdose and the associated vdm and vdm1 models: non-delta V1/2 values

We have two options for modeling the V1/2 (ec50) data: either as the difference from the control's value (delta V1/2), which is positive, or as the raw values, which are on the voltage scale and negative. Since our data show a negative relationship between dose and the V1/2 values, the fitted curve has a negative slope. The default priors in sigmoid_agonist_model are designed for an upward curve, while the antagonist model assumes a downward curve with a slope of -1. For our negative data, we therefore choose the antagonist model instead of the agonist model. To do so, we need to adjust the priors to cover negative values. Once the priors are adjusted, we can pass the first two columns of vdose to the sigmoid_antagonist_model in the BayesPharma package.

3.1.2. Prior selection (prior-only check was skipped)

Note about a change in the workflow

  • Ideally, we should select the priors and perform a prior check similar to the previous step. Alternatively, we can skip this step and directly proceed to model fitting. We can then use the prior-posterior plots to check whether the chosen priors cover an informative yet non-restrictive range.

  • As we will see later in the prior-posterior plots of vdm, the first set of priors, with which we fit vdm, is not appropriate, so we need to edit them (prior2) and refit the model as vdm1.

3.1.3. Model fitting

vdm<-BayesPharma::sigmoid_antagonist_model(data=vdose, prior=prior)
## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.000113 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.13 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 2.383 seconds (Warm-up)
## Chain 1:                1.814 seconds (Sampling)
## Chain 1:                4.197 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 3.2e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.32 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 2: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 2: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 2: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 2: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 2: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 2: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 2: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 2: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 2: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 2: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 2: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 2.406 seconds (Warm-up)
## Chain 2:                2.124 seconds (Sampling)
## Chain 2:                4.53 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 3e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.3 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 3: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 3: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 3: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 3: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 3: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 3: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 3: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 3: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 3: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 3: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 3: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 2.32 seconds (Warm-up)
## Chain 3:                1.81 seconds (Sampling)
## Chain 3:                4.13 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 2.7e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.27 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 4: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 4: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 4: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 4: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 4: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 4: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 4: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 4: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 4: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 4: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 4: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 2.597 seconds (Warm-up)
## Chain 4:                2.137 seconds (Sampling)
## Chain 4:                4.734 seconds (Total)
## Chain 4:

Taking a look at the vdm model:

vdm
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: response ~ sigmoid(ic50, hill, top, bottom, log_dose) 
##          ic50 ~ 1
##          hill ~ 1
##          top ~ 1
##          bottom ~ 1
##    Data: data (Number of observations: 11) 
##   Draws: 4 chains, each with iter = 8000; warmup = 4000; thin = 1;
##          total post-warmup draws = 16000
## 
## Population-Level Effects: 
##                  Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## ic50_Intercept      -5.76      0.17    -6.13    -5.47 1.00     6224     6111
## hill_Intercept      -1.33      0.43    -2.36    -0.68 1.00     6737     7421
## top_Intercept      -11.98      3.48   -18.16    -4.32 1.00     6475     5896
## bottom_Intercept   -44.90      3.02   -51.22   -39.27 1.00     7048     8090
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     3.99      1.20     2.38     7.02 1.00     6532     7323
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

3.1.4. Model check:

3.1.4.1. Diagnostics of vdm using “prior”

Let’s analyze the fitted model using visual checks and summaries of the model parameters and convergence diagnostics.

3.1.4.1.1. Prior-posterior density plots & posterior density plots

BayesPharma::prior_posterior_densities_plot(
  model = vdm,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "Prior&posterior")
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 9e-06 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.138 seconds (Warm-up)
## Chain 1:                0.167 seconds (Sampling)
## Chain 1:                0.305 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 9e-06 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.157 seconds (Warm-up)
## Chain 2:                0.122 seconds (Sampling)
## Chain 2:                0.279 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 8e-06 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.08 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.152 seconds (Warm-up)
## Chain 3:                0.145 seconds (Sampling)
## Chain 3:                0.297 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 1.2e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.12 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.152 seconds (Warm-up)
## Chain 4:                0.094 seconds (Sampling)
## Chain 4:                0.246 seconds (Total)
## Chain 4:

BayesPharma::posterior_densities_plot(
  model = vdm,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "")

Question: do the figures for top and bottom indicate that we have to constrain the corresponding priors? Compared with the priors of Model2, the priors for top and bottom are relatively narrow. Ideally, wider priors resembling those for ec50 would be preferred to avoid biased sampling. Narrow priors can lead to biased results, which may surface in later simulations, or reviewers may raise concerns if the priors conflict with their expert opinion. It is therefore advisable to use priors as wide as possible. After some trial and error, I chose 25 as the new standard deviation for these priors (prior2), which seems adequate. As a reminder, I skipped the separate prior checks and incorporated the new priors directly into the vdm1 model in the next step.
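One way to write out such a widened prior set is directly with brms. This is only a sketch of what prior2 might look like: the location values are illustrative, only the standard deviation of 25 on top and bottom follows the choice described above, and it assumes the non-linear parameters keep the names ic50, hill, top, and bottom from the model summary:

```r
library(brms)

# Hypothetical widened priors ("prior2"): same structure as "prior", but with
# the standard deviation for top and bottom set to 25; locations illustrative
prior2 <- c(
  prior(normal(-6, 2.5), nlpar = "ic50"),
  prior(normal(-1, 1),   nlpar = "hill"),
  prior(normal(-10, 25), nlpar = "top"),
  prior(normal(-45, 25), nlpar = "bottom")
)
prior2
```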

## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 6.8e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.68 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 6.1 seconds (Warm-up)
## Chain 1:                7.749 seconds (Sampling)
## Chain 1:                13.849 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 2.8e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.28 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 2: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 2: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 2: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 2: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 2: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 2: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 2: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 2: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 2: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 2: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 2: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 6.009 seconds (Warm-up)
## Chain 2:                5.452 seconds (Sampling)
## Chain 2:                11.461 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 2.4e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.24 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 3: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 3: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 3: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 3: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 3: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 3: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 3: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 3: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 3: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 3: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 3: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 6.525 seconds (Warm-up)
## Chain 3:                5.981 seconds (Sampling)
## Chain 3:                12.506 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 2e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.2 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 4: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 4: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 4: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 4: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 4: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 4: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 4: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 4: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 4: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 4: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 4: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 5.871 seconds (Warm-up)
## Chain 4:                4.548 seconds (Sampling)
## Chain 4:                10.419 seconds (Total)
## Chain 4:

3.1.4.2. Diagnostics of vdm1 using “prior2”

3.1.4.2.1. Prior/posterior and posterior checks

BayesPharma::prior_posterior_densities_plot(
  model = vdm1,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "Prior & posterior")
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.654 seconds (Warm-up)
## Chain 1:                0.366 seconds (Sampling)
## Chain 1:                1.02 seconds (Total)
## Chain 1: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
## Chain 2: 
## Chain 2: Gradient evaluation took 1e-05 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2: 
## Chain 2: 
## Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 2: 
## Chain 2:  Elapsed Time: 0.739 seconds (Warm-up)
## Chain 2:                0.318 seconds (Sampling)
## Chain 2:                1.057 seconds (Total)
## Chain 2: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
## Chain 3: 
## Chain 3: Gradient evaluation took 1e-05 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3: 
## Chain 3: 
## Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 3: 
## Chain 3:  Elapsed Time: 0.639 seconds (Warm-up)
## Chain 3:                0.146 seconds (Sampling)
## Chain 3:                0.785 seconds (Total)
## Chain 3: 
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
## Chain 4: 
## Chain 4: Gradient evaluation took 1e-05 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.1 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4: 
## Chain 4: 
## Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 4: 
## Chain 4:  Elapsed Time: 0.528 seconds (Warm-up)
## Chain 4:                0.223 seconds (Sampling)
## Chain 4:                0.751 seconds (Total)
## Chain 4:

BayesPharma::posterior_densities_plot(
  model = vdm1,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "")

3.1.4.2.2. Posterior draws plot

BayesPharma::posterior_draws_plot(
  model = vdm1,
  title = "Increase in the drug dose results in lower voltage needed for ligand activation",
  ylab = "V1/2 (non-delta values)")

3.2. Vdose1: incorporating the delta v1/2 values

The graph above shows the absolute voltage values for each dose. However, to compare our results with the study by Li et al. (2021) and replicate their insets, we need comparable measures. In their Figure 1 insets, the Y-axis shows the difference between the V1/2 of the control group and the V1/2 of the treatment groups. Positive values thus represent drugs that lower the voltage needed for activation, increasing voltage sensitivity. This lets us visualize the effectiveness of the drugs and use the agonist models to analyze the data. To do this, we model curves for the control groups, run the Bayesian fit on them, and extract the ec50, which corresponds to the V1/2. We then subtract each treatment's V1/2 from this constant control value. The following code does this and forms a new data set called vdose1, whose response column holds the difference between the control V1/2 and the V1/2 of the corresponding treatment.
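The subtraction itself is straightforward. A self-contained sketch with made-up numbers (the column names mirror vdose; the control V1/2 value is illustrative, the real one comes from the control-group fits below):

```r
library(dplyr)

v_half_control <- -30  # illustrative control V1/2 (mV)

# Hypothetical treatment V1/2 values, one per dose
vdose_demo <- data.frame(
  dose     = c(0.1, 1, 10),
  response = c(-35, -45, -55)  # raw (negative) V1/2 of the treatments
)

# Delta V1/2 = control minus treatment: larger shifts give larger
# positive values, matching the insets in Li et al. (2021)
vdose1_demo <- vdose_demo |>
  mutate(response = v_half_control - response)

vdose1_demo$response  # 5 15 25
```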

3.2.1. Data preparation

3.2.1.1. Importing data for the ZTZ control: dc => ZTZcontrl

# Select the ZTZ control rows and columns from df1 and rename the run columns
dc <- df1[c(2:17, 21:36, 39:54, 57:72, 76:91), c(3, 4:8)]

colnames(dc)[2:6] <- c("run1", "run2", "run3", "run4", "run5")

dc$run1 <- as.double(dc$run1)

# Reshape to long format: one row per (voltage, run) acquisition
dc <- dc |>
  pivot_longer(c("run1", "run2", "run3", "run4", "run5"),
               names_to = "run number", values_to = "Acquisition")

# Taking the mean voltage needed for delta V for ZTZ;
# defining the columns needed for sigmoid_agonist_model (log_dose and response)
dc <- dc |>
  dplyr::mutate(
    Voltage = as.numeric(Voltage),
    response = Acquisition,
    log_dose = Voltage) |>
  dplyr::filter(!is.na(Acquisition))

dc

3.2.1.2. Importing data for the RTG control (from df2): dc1 => RTG control

# Select the RTG control rows and columns from df2 and rename the run columns
dc1 <- df2[c(1:16, 19:34, 38:53, 56:71, 75:90, 94:109), c(2, 3:8)]

dc1$control <- as.double(dc1$control)

colnames(dc1)[2:7] <- c("run1", "run2", "run3", "run4", "run5", "run6")

# Reshape to long format: one row per (voltage, run) acquisition
dc1 <- dc1 |>
  pivot_longer(c("run1", "run2", "run3", "run4", "run5", "run6"),
               names_to = "run number", values_to = "Acquisition")

# Define log_dose and response for the sigmoid model
dc1 <- dc1 |>
  dplyr::mutate(
    Voltage = as.numeric(Voltage),
    response = Acquisition,
    log_dose = Voltage) |>
  dplyr::filter(!is.na(Acquisition))

dc1

3.2.2. Model fitting

3.2.2.1. ZTZcontrl model

## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.00034 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 3.4 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 19.595 seconds (Warm-up)
## Chain 1:                18.311 seconds (Sampling)
## Chain 1:                37.906 seconds (Total)
## Chain 1: 
## (Output for Chains 2-4 omitted; each ran the same 8000 iterations, 35-40 seconds total per chain.)

3.2.2.2. RTG contrl model

## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 0.000381 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 3.81 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 22.6 seconds (Warm-up)
## Chain 1:                20.382 seconds (Sampling)
## Chain 1:                42.982 seconds (Total)
## Chain 1: 
## (Output for Chains 2-4 omitted; each ran the same 8000 iterations, 43-47 seconds total per chain.)

3.3. Model checks were skipped for the control groups

3.4. Extracting the values needed to calculate deltaV1/2: vdose1 df

library(posterior)

# Summarize the posterior draws of the two control models
dcz <- summarize_draws(vcZm)
dcz

dcr <- summarize_draws(vcrm)
dcr

3.4.1 Vdose1: incorporating the delta v1/2 values

# vdose1: use delta V1/2 as the response column instead of the raw voltage
# values; -7.77 and -8.99 are the control V1/2 values extracted above
vdose1 <- vdose %>%
  mutate(response = ifelse(row_number() <= 7, -7.77 - response, -8.99 - response))

vdose1
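The ifelse() step above can be sketched outside R as well; here is a minimal Python version of the same row-wise subtraction, with hypothetical response values (the -7.77 and -8.99 control constants are taken from the chunk above; which drug each constant belongs to follows the row split in vdose):

```python
# Python sketch of the ifelse(row_number() <= 7, ...) step: the first seven
# rows use one control V1/2 (-7.77), the remaining rows the other (-8.99).
# The response values here are hypothetical.
responses = [-20.0, -15.0, -10.0, -5.0, -25.0, -30.0, -12.0, -18.0, -22.0]

deltas = [(-7.77 - r) if i < 7 else (-8.99 - r)
          for i, r in enumerate(responses)]

print([round(d, 2) for d in deltas])
```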

Now that we have calculated the delta V1/2, we can use the agonist model instead of the antagonist model we used for vdm1. But we need new priors suitable for the agonist model:

3.5.1. Prior selection (check is skipped)

3.5.2. Model fitting: vdm2

vdm2 models the delta V1/2 values as the response for each drug dose (the intercept is set to zero).

## Compiling Stan program...
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 8.6e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.86 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 8000 [  0%]  (Warmup)
## Chain 1: Iteration:  800 / 8000 [ 10%]  (Warmup)
## Chain 1: Iteration: 1600 / 8000 [ 20%]  (Warmup)
## Chain 1: Iteration: 2400 / 8000 [ 30%]  (Warmup)
## Chain 1: Iteration: 3200 / 8000 [ 40%]  (Warmup)
## Chain 1: Iteration: 4000 / 8000 [ 50%]  (Warmup)
## Chain 1: Iteration: 4001 / 8000 [ 50%]  (Sampling)
## Chain 1: Iteration: 4800 / 8000 [ 60%]  (Sampling)
## Chain 1: Iteration: 5600 / 8000 [ 70%]  (Sampling)
## Chain 1: Iteration: 6400 / 8000 [ 80%]  (Sampling)
## Chain 1: Iteration: 7200 / 8000 [ 90%]  (Sampling)
## Chain 1: Iteration: 8000 / 8000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 5.926 seconds (Warm-up)
## Chain 1:                5.963 seconds (Sampling)
## Chain 1:                11.889 seconds (Total)
## Chain 1: 
## (Output for Chains 2-4 omitted; each ran the same 8000 iterations, 13-16 seconds total per chain.)
vdm2
##  Family: gaussian 
##   Links: mu = identity; sigma = identity 
## Formula: response ~ sigmoid(ec50, hill, top, bottom, log_dose) 
##          ec50 ~ 0 + drug
##          hill ~ 0 + drug
##          top ~ 0 + drug
##          bottom ~ 0 + drug
##    Data: data (Number of observations: 11) 
##   Draws: 4 chains, each with iter = 8000; warmup = 4000; thin = 1;
##          total post-warmup draws = 16000
## 
## Population-Level Effects: 
##                   Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## ec50_drugRTG         -5.93      0.61    -7.08    -4.97 1.00     3885     2952
## ec50_drugZTZ240      -5.43      0.48    -6.54    -4.65 1.00     4292     2453
## hill_drugRTG          1.13      0.62     0.27     2.57 1.00     2754     4416
## hill_drugZTZ240       0.99      0.48     0.31     2.20 1.00     4160     3709
## top_drugRTG          39.68      8.31    29.96    62.90 1.00     3732     3467
## top_drugZTZ240       51.28     15.54    28.21    87.54 1.00     4861     5943
## bottom_drugRTG       -0.28     14.71   -39.10    16.77 1.00     3458     5005
## bottom_drugZTZ240     0.05      8.61   -23.62    10.96 1.00     4718     2951
## 
## Family Specific Parameters: 
##       Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma     3.67      1.76     1.70     8.39 1.00     1856     2100
## 
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
# We need to convert back from the log scale and also report the l-95% / u-95% range

3.5.3. Model check

We run the regular diagnostic checks here as well, to see whether the model fits adequately.

3.5.3.1. Posterior and prior/posterior density plots

BayesPharma::prior_posterior_densities_plot(
  model = vdm2,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "Prior & posterior")
## Start sampling
## 
## SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
## Chain 1: 
## Chain 1: Gradient evaluation took 1.1e-05 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1: 
## Chain 1: 
## Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
## Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
## Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
## Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
## Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
## Chain 1: 
## Chain 1:  Elapsed Time: 0.415 seconds (Warm-up)
## Chain 1:                0.196 seconds (Sampling)
## Chain 1:                0.611 seconds (Total)
## Chain 1: 
## (Output for Chains 2-4 omitted; each ran the same 2000 iterations, under 1 second total per chain.)

BayesPharma::posterior_densities_plot(
  model = vdm2,
  predictors_col_name = "dose",
  half_max_label = "ec50",
  title_label = "Posterior density plots")

3.5.3.2. Posterior draws plot

BayesPharma::posterior_draws_plot(
  model = vdm2,
  title = "Different dose-response curves for ZTZ and RTG",
  ylab = "V1/2 (Delta values)")

Convert the numbers to micromolar so we can compare them with those of the paper:

# RTG
# Center:
round((10^-5.92)*10^6, 2)
## [1] 1.2
# Upper CI:
round((10^-5.92 + 10^-5.03)*10^6, 2)
## [1] 10.53
# Lower CI:
round((10^-5.92 - 10^-7.03)*10^6, 2)
## [1] 1.11
# Paper range (mean ± SD):
1.56 + 0.79
## [1] 2.35
1.56 - 0.79
## [1] 0.77
# ZTZ
# Center:
round((10^-5.43)*10^6, 2)
## [1] 3.72
# Upper CI:
round((10^-5.43 + 10^-4.66)*10^6, 2)
## [1] 25.59
# Lower CI:
round((10^-5.43 - 10^-6.51)*10^6, 2)
## [1] 3.41
# Paper range (mean ± SD):
3.98 + 0.33
## [1] 4.31
3.98 - 0.33
## [1] 3.65
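The back-transformation from a log10(molar) EC50 to micromolar can be spot-checked; a minimal Python sketch using the central estimates from the chunk above (-5.92 for RTG, -5.43 for ZTZ):

```python
# Back-transform a log10(molar) EC50 to micromolar:
# 10^log_ec50 mol/L times 1e6 uM per mol/L.
def log10_molar_to_uM(log_ec50):
    return 10 ** log_ec50 * 1e6

print(round(log10_molar_to_uM(-5.92), 2))  # RTG central estimate -> 1.2
print(round(log10_molar_to_uM(-5.43), 2))  # ZTZ central estimate -> 3.72
```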

Results

Drug name | Paper results (traditional approach) | Bayesian approach's results
--- | --- | ---
RTG | Mean: 1.56 ± 0.79; UL: 2.35; LL: 0.77 | Mean: 1.2; UL: 10.53 (1.2 + 9.33); LL: 1.11 (1.2 - 0.09)
ZTZ | Mean: 3.98 ± 0.33; UL: 4.31; LL: 3.65 | Mean: 3.72; UL: 25.59 (3.72 + 21.87); LL: 3.41 (3.72 - 0.31)

The EC50 ranges (μM) from the Bayesian vs. traditional approach for RTG and ZTZ (UL = upper limit, LL = lower limit).

Conclusion

The numerical ranges presented above suggest that the frequentist point estimates and their corresponding ranges reported in the paper may be overconfident. In contrast, our Bayesian approach yields broader ranges that more realistically capture the uncertainty in the data, given possible limitations such as small sample size, missing values, and outliers. However, further testing and evaluation are needed to determine which of these models provides more robust and accurate information.

Further direction

The rotation ended in the middle of fitting the MuSyC model, which provides a framework for modeling the synergistic effects of two interventions. The raw code is included in the draft, and the main paper explaining the MuSyC model is listed in the Resources section.

Resources

  1. Link to the main paper:

https://www.nature.com/articles/s41422-020-00410-8

  2. Google doc for the bibliography of Bayesian statistics (note the section on Bayesian & clinical pharmacology):

https://docs.google.com/document/d/1AjmA4p8NqF-k22kh-Ogf5m9qkh1j5i-zzvzRy70W0kQ/edit?usp=sharing

  3. Paper for the MuSyC model:

https://www.nature.com/articles/s41467-021-24789-z

  4. R for Data Science:

https://r4ds.had.co.nz/tidy-data.html

Specifically on Bayesian & clinical pharmacology:

  • (Polack et al. 2020) "Safety and Efficacy of the BNT162b2 mRNA Covid-19 Vaccine": the first report of an efficacious vaccine against SARS-CoV-2, which used Bayesian inference.
  • (Senn 2022) "The design and analysis of vaccine trials for COVID-19 for the purpose of estimating efficacy".
  • Medium article on the Bayesian statistics of Pfizer-BioNTech vaccine efficacy (Beta-binomial model): https://towardsdatascience.com/bayesian-statistics-of-efficacy-of-pfizer-biontech-covid-19-vaccine-part-i-efac8d4e0539

General notes

Code notes:

  1. Note about the code chunks:
  • You can name a code chunk with any string, as long as it is unique in the document.

  • Set the model to use all available computer cores as needed: options(mc.cores = 4)

  2. A note about the mutate function: "mutate" creates new columns in a data frame, or modifies existing ones, using data transformation functions, mathematical operations, logical operations, or custom functions. The resulting data frame has the original columns along with the newly created or modified ones.

  3. Notes about Git version control:

At the moment Git doesn't work through the "Diff" tab in the upper-right panel; we have to create commits manually from the console: $ git commit -m "version name, e.g., maomlab v1"

General stat notes (stats etc.)

Dose-response models:

Q: What are some common types of pharmacological models used to describe dose-response relationships?

Sigmoidal Dose-Response Models:

  • Description: Describe the relationship between dose/concentration and response in a sigmoid-shaped curve.
  • Examples: Logistic, Gompertz, Hill models.
  • Common use: Characterize the shape of dose-response curves.

Efficacy and potency in dose-response curves:

[Figures: sigmoidal dose-response curves illustrating potency and efficacy]

Hyperbolic Models:

  • Description: Assume a hyperbolic relationship between dose/concentration and response.
  • Examples: Michaelis-Menten model.
  • Common use: Describe enzyme kinetics and other applications.

Emax Models:

  • Description: Assume that the response increases with dose/concentration and reaches a maximum (Emax).
  • Common use: Describe the relationship between drug concentration and pharmacodynamic effects.

Probit Models:

  • Description: Describe the probability of a binary response (e.g., presence or absence of an effect) as a function of dose/concentration.
  • Common use: Estimate the dose/concentration that produces a certain probability of response, often used in toxicology and pharmacology.
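The first three model families above can be written as one-line functions; here is a minimal Python sketch with hypothetical parameter values (the Probit model is omitted because it needs a normal CDF):

```python
# Minimal sketches of the dose-response model families listed above;
# the parameter values in the example call are hypothetical.
def hill(dose, ec50, n, top, bottom):
    """Sigmoidal (Hill) dose-response curve rising from bottom to top."""
    return bottom + (top - bottom) / (1 + (ec50 / dose) ** n)

def michaelis_menten(s, vmax, km):
    """Hyperbolic response: vmax * s / (km + s)."""
    return vmax * s / (km + s)

def emax_model(conc, e0, emax, ec50):
    """Emax model: effect rises from e0 toward e0 + emax."""
    return e0 + emax * conc / (ec50 + conc)

# At dose == ec50 the Hill curve sits halfway between bottom and top:
print(hill(dose=1.0, ec50=1.0, n=1.5, top=100.0, bottom=0.0))  # -> 50.0
```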

Definition of degrees of freedom: Q&A

1. Why are mean and variance considered dependent, and why do we subtract them? Does this mean that, because they are calculated from the observations, they are dependent on the observations and therefore not free to vary?

In statistical tests, degrees of freedom account for parameters, such as the mean and variance, that are estimated from the sample data. The mean and variance are considered dependent, and are subtracted when calculating degrees of freedom, because they are estimated from the sample rather than known with certainty.

When estimating parameters, there is a trade-off between the amount of information available and the uncertainty associated with the estimates. In the case of mean and variance, they are estimated from the sample data, which means that the estimates are influenced by the specific observations in the sample. As a result, the estimates of mean and variance are not considered fully independent, as they are influenced by the data available in the sample.

To account for this dependence, degrees of freedom are used in statistical tests to represent the effective number of independent pieces of information available for estimation. By subtracting the number of parameters being estimated from the total sample size, we account for the reduction in effective sample size due to the estimation of parameters from the sample data.

In simpler terms, degrees of freedom represent the number of independent pieces of information available for estimation after accounting for the estimation of parameters from the sample data. Estimating parameters such as the mean and variance reduces the effective sample size available for estimation, and degrees of freedom account for this reduction in independence.

2. Is it true that by calculating more statistics, the degrees of freedom are decreased?

Yes, that’s correct. When estimating multiple parameters or statistics from sample data, the degrees of freedom decrease as more of them are estimated. Each additional estimate reduces the effective sample size available for estimating every other parameter or statistic. In other words, estimating more parameters or statistics from the same sample leaves fewer degrees of freedom, which can affect the accuracy and reliability of the statistical tests and inferences.

3. What happens to the degrees of freedom in a linear regression model as we increase the number of x variables to build a better model?

The degrees of freedom typically decrease. This is because each x variable added to the model introduces an additional parameter (i.e., regression coefficient) that needs to be estimated from the data. As a result, the effective sample size available for estimating each individual parameter decreases, leading to a decrease in the degrees of freedom. In other words, the more x variables added to the model, the fewer degrees of freedom are available for estimating the parameters, which can impact the accuracy and reliability of the statistical inferences and model evaluations.
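A quick numeric illustration: in ordinary least squares the residual degrees of freedom are n - p - 1 (sample size minus the number of x variables, minus one for the intercept), so each added predictor costs one degree of freedom:

```python
# Residual degrees of freedom in ordinary least squares: n - p - 1
# (n observations, p predictors, minus one for the intercept).
def residual_df(n, p):
    return n - p - 1

# With n = 30 observations, each added predictor removes one df:
print([residual_df(30, p) for p in range(1, 5)])  # -> [28, 27, 26, 25]
```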

4. Is there an inverse relationship between entropy and degrees of freedom (DF)?

Yes, in a sense, the relationship between entropy and degrees of freedom (DF) can be considered inverse.

Entropy is a measure of uncertainty or information content in a system, with higher entropy indicating higher uncertainty or lack of information, and lower entropy indicating lower uncertainty or more information. In the context of Bayesian statistics, entropy is often used to represent the uncertainty in the prior distribution, which is the initial belief about the parameters of interest before incorporating data.

On the other hand, degrees of freedom (DF) are used in parametric statistical tests to represent the amount of information available in the data, typically associated with the sample size. Higher degrees of freedom are associated with larger sample sizes and more information in the data, while lower degrees of freedom are associated with smaller sample sizes and less information.

In general, as the degrees of freedom increase, the uncertainty in the data decreases, and vice versa. This means that when more information is available in the data (higher DF), the uncertainty or entropy associated with the data tends to decrease, and when less information is available (lower DF), the uncertainty or entropy tends to increase. However, it’s important to note that the relationship between entropy and degrees of freedom can be influenced by various factors and may not always be a strict inverse relationship. It depends on the specific context and statistical techniques being used.

Degrees of freedom in frequentist vs. Bayesian stats

Can both degrees of freedom and entropy be considered measures of uncertainty in the data? Yes, in a sense both can, but in different contexts.

Degrees of freedom (DF) typically refers to the number of values in the final calculation of a statistic that are free to vary. In statistical tests, it is often associated with the number of independent pieces of information available in the data, which affects the variability or uncertainty of the estimated statistic. Higher degrees of freedom generally indicate a larger sample size or more information available, which can reduce the uncertainty or variability of the estimated statistic.

Entropy, on the other hand, is a concept from information theory that measures the uncertainty or randomness in a probability distribution. In the context of Bayesian statistics, entropy is often used to represent the uncertainty or lack of information in the prior distribution, which is a subjective belief about the unknown parameters before considering the data. As more data is incorporated into the Bayesian analysis, the entropy of the posterior distribution (which represents the updated belief about the parameters after considering the data) typically decreases, indicating a reduction in uncertainty or increased confidence in the estimated parameters.

So, while both degrees of freedom and entropy can be considered as measures of uncertainty in the data, they are used in different statistical contexts and have different interpretations. Degrees of freedom are typically associated with the variability or uncertainty of estimated statistics in frequentist statistics, whereas entropy is used to represent the uncertainty or lack of information in probability distributions, particularly in Bayesian statistics.

In statistics, degrees of freedom refer to the number of values in the final calculation of a statistic that are free to vary. The concept of degrees of freedom is used in various statistical methods, such as hypothesis testing, confidence interval estimation, and model estimation, to determine the variability and uncertainty associated with the estimates or test statistics. The definition and interpretation of degrees of freedom can differ slightly between frequentist and Bayesian statistics.

In frequentist statistics, degrees of freedom typically arise in the context of parametric statistical tests, such as t-tests or F-tests, where assumptions about the underlying population distribution are made. Degrees of freedom are used to determine the appropriate critical values or quantiles from a probability distribution, such as the t-distribution or the F-distribution, that are used in hypothesis testing or confidence interval estimation. In this context, degrees of freedom represent the number of independent pieces of information that are available to estimate a parameter or perform a statistical test.

In Bayesian statistics, degrees of freedom are not always explicitly defined, as Bayesian methods do not rely on fixed or predefined probability distributions. Bayesian statistics often use prior distributions that are updated with data to obtain a posterior distribution. The posterior distribution is then used for inference, and degrees of freedom may not be explicitly calculated or used in the same way as in frequentist statistics. Bayesian methods tend to focus more on the posterior distribution and its characteristics, such as the shape, center, and spread, rather than degrees of freedom specifically.

It’s important to note that while the concept of degrees of freedom is used in both frequentist and Bayesian statistics, its interpretation and application can differ depending on the specific statistical method, model, or context being used.
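As a concrete frequentist example, in a pooled two-sample t-test the degrees of freedom come straight from the sample sizes (simulated data, for illustration only):

```r
set.seed(1)
x <- rnorm(10)  # group 1, n1 = 10
y <- rnorm(12)  # group 2, n2 = 12

# Pooled (equal-variance) t-test: df = n1 + n2 - 2
unname(t.test(x, y, var.equal = TRUE)$parameter)  # 20
```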

About degrees of freedom in frequentist stat vs. entropy in Bayesian stats

Entropy is a concept from information theory that measures the amount of uncertainty or randomness in a probability distribution. It is used to quantify the uncertainty or lack of information about a random variable.

On the other hand, degrees of freedom in statistics typically refer to the number of values that are free to vary in the calculation of a statistic. Degrees of freedom are often associated with estimating parameters or performing statistical tests, and they represent the number of independent pieces of information available for estimation.

In Bayesian statistics, entropy can be used as a measure of the amount of information contained in a posterior distribution. High entropy indicates a more uncertain or diffuse distribution, while low entropy indicates a more concentrated or informative distribution. Entropy can be used in Bayesian model selection, model comparison, or decision-making, among other applications.

Degrees of freedom, on the other hand, are often used in frequentist statistical methods to determine critical values, calculate standard errors, or perform hypothesis tests. They typically arise in the context of parametric statistical tests that assume a specific probability distribution, such as the t-distribution or the F-distribution. Degrees of freedom represent the number of independent pieces of information available for estimating parameters or performing statistical tests, and they are not directly related to entropy in Bayesian statistics.

In summary, while both entropy and degrees of freedom are concepts used in statistics, they have different interpretations and applications in Bayesian and frequentist statistics, respectively, and they are not equivalent to each other.

The interesting point when comparing two statistical approaches, frequentist and Bayesian, using degrees of freedom (DF) and entropy, is that in frequentist statistics, you start with a simple model and gradually add complexity to capture the data structure (by adding DF). However, the tools used in frequentist statistics may be more adapted to simpler models, and thus may have a bias or preference for simpler models when making the model more complex. This means that there is a limitation on how much complexity you can introduce to the data. On the other hand, in Bayesian statistics, you start with a broader spectrum (prior) that has higher entropy or uncertainty, and as more data is introduced, the entropy of the posterior distribution decreases (in an ideal case). This can be observed in the graphs, where the posterior distribution is narrower than the prior. If the posterior distribution increases in entropy, it may indicate that something is wrong with the model or the data, and they are not a good fit.
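The prior-to-posterior entropy drop described above can be made concrete with a conjugate beta-binomial update. The differential entropy of a Beta(a, b) distribution has a standard closed form; the observed counts below are invented for illustration:

```r
# Differential entropy of a Beta(a, b) distribution (closed form):
# H = log B(a, b) - (a-1)*psi(a) - (b-1)*psi(b) + (a+b-2)*psi(a+b)
beta_entropy <- function(a, b) {
  lbeta(a, b) -
    (a - 1) * digamma(a) -
    (b - 1) * digamma(b) +
    (a + b - 2) * digamma(a + b)
}

# Flat prior Beta(1, 1): maximum uncertainty on [0, 1]
beta_entropy(1, 1)   # 0

# After observing 7 successes in 20 trials, the conjugate posterior
# is Beta(1 + 7, 1 + 13); its entropy is lower (narrower distribution)
beta_entropy(8, 14)  # negative, i.e. below the prior's entropy
```

If the posterior's entropy were instead higher than the prior's, that would be the warning sign described above that the model and data are a poor fit.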

LOO: Leave-One-Out cross-validation

LOO is a cross-validation technique used in statistics and machine learning to estimate the performance of a predictive model. It involves fitting the model multiple times, leaving out one observation at a time, and then using the model to predict the left-out observation. This process is repeated for all observations in the dataset.

In terms of entropy measurement, LOO can provide an estimate of the prediction performance of a model by quantifying how well the model generalizes to unseen data. It can help assess the variability and uncertainty in the model’s predictions by evaluating its performance on different subsets of the data.

LOO can be used to estimate the out-of-sample prediction error, which is a measure of how well the model is likely to perform on new, unseen data. A lower out-of-sample prediction error suggests a more accurate and reliable model, while a higher out-of-sample prediction error indicates higher uncertainty and potential overfitting.

By assessing the model’s performance on different subsets of the data and estimating the out-of-sample prediction error, LOO can provide insights into the entropy or uncertainty associated with the model’s predictions, helping in model selection and evaluation.
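In brms (which BayesPharma builds on), approximate LOO (PSIS-LOO) is available directly on fitted models. A sketch, assuming `fit1` and `fit2` are two hypothetical `brmsfit` objects for the same data (not run here):

```r
library(brms)

# Approximate leave-one-out cross-validation for each candidate model
loo1 <- loo(fit1)
loo2 <- loo(fit2)

# elpd_loo estimates expected out-of-sample predictive performance;
# the model with the higher elpd_loo is preferred
loo_compare(loo1, loo2)
```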

Hill equation

The Hill equation is a mathematical equation commonly used in biochemistry and pharmacology to describe the relationship between the concentration of a ligand (e.g., a drug) and the response or activity of a biological system, such as an enzyme or a receptor. It was first introduced by the British physiologist Archibald Vivian Hill in 1910 and has since been widely used in various fields of science, including biology, biochemistry, pharmacology, and physiology.

The general form of the Hill equation is:

Response = ( [L]^n ) / ( (K^n) + ([L]^n) )

where:

“Response” represents the observed biological response or activity.

“[L]” represents the concentration of the ligand (e.g., drug) being applied.

“n” is the Hill coefficient, which describes the steepness or cooperativity of the response: values greater than 1 indicate positive cooperativity and values between 0 and 1 indicate negative cooperativity.

“K” is the concentration of the ligand at which the response is half-maximal, also known as the half-maximal effective concentration (EC50); for simple binding it corresponds to the apparent dissociation constant.

The Hill equation is often used to model dose-response relationships in pharmacology, where it can provide insight into the potency, efficacy, and cooperativity of drug effects on biological systems. It has also been used in other areas of science to describe interactions and relationships between variables beyond ligand-receptor interactions. It is important to note that the Hill equation is a mathematical model and should be used appropriately and with caution, considering the specific context and assumptions of the system being studied.
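The equation can be written as a one-line R function; note that at [L] = K the response is exactly half-maximal regardless of the Hill coefficient:

```r
# Fractional response for ligand concentration L, half-maximal
# concentration K (EC50), and Hill coefficient n
hill <- function(L, K, n) L^n / (K^n + L^n)

hill(L = 1e-6, K = 1e-6, n = 1)    # 0.5 at L = K
hill(L = 1e-6, K = 1e-6, n = 2.5)  # still 0.5: n sets steepness, not EC50
hill(L = 1e-5, K = 1e-6, n = 2.5)  # ~0.997, near saturation well above K
```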

Is it possible for the Hill coefficient (n) to be negative when describing the effect of an antagonist drug using the Hill equation?

No, in the context of the Hill equation, the Hill coefficient (n) is typically used to describe the cooperativity or steepness of the response curve for agonist drugs, which are substances that activate a receptor and typically result in a biological response. The Hill coefficient is a positive value in most cases, indicating positive cooperativity, meaning that the binding of one ligand molecule increases the affinity or efficacy of subsequent ligand bindings, resulting in a steeper response curve.

For antagonist drugs, which are substances that block or inhibit the activity of a receptor, the Hill coefficient is typically not used in the same way as for agonists. Antagonists generally do not exhibit cooperativity, as their primary effect is to block or reduce the activity of the receptor without directly inducing a biological response. The response curve for an antagonist is often described using other parameters, such as the dissociation constant (Kd) or the inhibition constant (Ki), which quantify the affinity or potency of the antagonist in inhibiting the receptor activity.

It’s important to note that the Hill coefficient and other parameters used in pharmacological models, including the Hill equation, are empirical measures that depend on the specific experimental conditions and system being studied. The interpretation of these parameters should be done with caution and in the appropriate context, considering the specific characteristics of the drug, receptor, and biological system being investigated.

Notes about figure 1, MusyC paper:

  1. Two-state transition model for a single drug system: This refers to a theoretical model that describes the effects of a single drug on cells, where cells can be in two states - unaffected (indicated in red with “U”) or affected (indicated in cyan with “A1”).

  2. Law of Mass Action: This is a principle from chemistry that describes how the concentration of reacting substances affects the rate of a chemical reaction. In this context, it’s used to describe how the concentration of a drug affects its interaction with cells.

  3. 4-parameter Hill equation: This is a mathematical equation used to fit dose-response relationships, which describes how the effects of a drug change with different doses. It has four parameters that measure the drug’s efficacy (the difference between the maximum and minimum effects), potency (the concentration of the drug needed to produce a certain effect), and cooperativity (how the drug’s effects are influenced by other factors).

  4. MuSyC (Multi-dimensional Synergy of Combinations): This is a four-state state-transition model of combination pharmacology, which describes how two drugs interact with each other when used together. It’s based on the Law of Mass Action and results in a 2D Hill-like equation that describes a dose-response surface, which is a graphical representation of how the combined effects of the two drugs change with different doses.

  5. Parameters of the 2D Hill equation: The 2D Hill equation has additional parameters (β, α, γ) compared to the single drug Hill equation, which describe how the combination of two drugs affects their efficacy (the maximum effect), potency (the concentration needed for a certain effect), and cooperativity (how their effects interact with each other). These parameters have different values depending on the specific drug combination and can be interpreted as synergistic efficacy (β), synergistic potency (α), and synergistic cooperativity (γ).

  6. Edge notations: These refer to specific conditions at the boundaries of the dose-response surface, where the effects of the two drugs are compared at very high doses of one drug. Different edge notations correspond to different scenarios of drug interaction and can affect the values of the parameters in the 2D Hill equation.

In summary, the passage describes mathematical models used to understand the effects of single drugs and drug combinations on cells, and how different parameters in these models describe the efficacy, potency, and cooperativity of the drugs in different scenarios.

Notes about the x-axis of insets of Fig.1 dose-response curves

Although the x axis appears to show the actual dose values for each drug (RTG and ZTZ240), the axis is in fact on a log scale, as is traditional. If you look closely at the x axis you can see that the tick marks are unevenly spaced, very close together in some parts, because they correspond to the log value of each specific dose.

Tiny note about the use of log in pharmacological papers:

What is the common approach in pharmacological papers for reporting dose? -log?

In pharmacological papers, the common approach for reporting dose is to use a logarithmic scale, specifically a base-10 logarithmic scale. This is because the relationship between dose and response in pharmacology often follows a logarithmic curve, known as the dose-response curve.

Using a logarithmic scale allows for a wider range of doses to be included on a single graph or chart without the curve becoming too steep or too shallow, which can make it difficult to interpret the data.

By using a logarithmic scale, the distance between the tick marks on the x-axis becomes proportional to the ratio between doses, rather than the difference between doses. This makes it easier to compare the effects of different doses, especially at low doses where small differences in dose can have a significant impact on the response.

Therefore, it is common to see pharmacological papers report doses on a logarithmic scale, often in units such as log10(mg/kg) or log10(M).
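For example, a typical half-log dose series is very unevenly spaced on the raw scale but nearly evenly spaced after taking log10 (the series below is illustrative):

```r
doses <- c(1e-7, 3e-7, 1e-6, 3e-6, 1e-5)  # molar

diff(doses)         # raw gaps span two orders of magnitude
diff(log10(doses))  # ~0.48 / 0.52: nearly equal steps on the log scale
```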

What is the Sigma estimate in Bayes?

In Bayesian statistics, the sigma estimate refers to the uncertainty or variability of the model’s predictions. Specifically, it is the standard deviation of the distribution of possible outcomes or predictions based on the model.

To give a concrete example, suppose we have a linear regression model that predicts a response variable y based on one or more predictor variables x. The model can be written as:

y = β₀ + β₁x₁ + β₂x₂ + … + ε

where β₀, β₁, β₂, etc. are the coefficients of the model, and ε is a random error term with mean 0 and standard deviation σ.

In Bayesian statistics, we would put a prior distribution on the values of the coefficients and the value of σ. We would then use Bayes’ theorem to update our beliefs about the coefficients and σ based on the data we observe.

The posterior distribution of σ would then tell us the uncertainty or variability of the model’s predictions. A higher value of σ would indicate that the model’s predictions are more uncertain or variable, while a lower value of σ would indicate that the model’s predictions are more certain or precise.
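A minimal brms sketch of this idea, using simulated data; the priors below are illustrative and are not the ones used elsewhere in this project:

```r
library(brms)

set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 2 + 3 * d$x + rnorm(100, sd = 0.5)

# Priors on the regression coefficients and on sigma itself
fit <- brm(
  y ~ x,
  data = d,
  prior = c(
    prior(normal(0, 10), class = "b"),
    prior(exponential(1), class = "sigma")))

# Posterior summary of sigma: its Estimate is the residual standard
# deviation, and its Est.Error quantifies our uncertainty about it
posterior_summary(fit, variable = "sigma")
```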

Where to use Bayes and where frequentist approach?

In the frequentist approach, the null hypothesis is usually a simple statement about the population parameter of interest, such as “the mean difference between two groups is zero” or “there is no association between two variables.” The frequentist approach then uses sample data to calculate a test statistic and p-value, which can be used to determine whether the null hypothesis should be rejected or not.

However, when dealing with more complex models or inferences, the frequentist approach can sometimes fall short. This is because the frequentist approach tends to focus on point estimates of parameters, without considering the uncertainty surrounding those estimates. This can be problematic, as it can lead to overconfidence in our results and an underestimation of the true variability in the data.

This is where the Bayesian approach comes in. The Bayesian approach naturally incorporates uncertainty by using prior distributions to represent our beliefs about the parameters before observing any data, and updating those beliefs in light of the observed data to obtain a posterior distribution. The posterior distribution represents our updated beliefs about the parameters, taking into account both the prior information and the observed data.

The Bayesian approach also allows for more flexibility in model specification and can handle more complex models with ease. Bayesian models can include multiple levels of randomness, hierarchical structures, and complex interactions between variables. In addition, the Bayesian approach provides a natural framework for model comparison, which can be used to evaluate the relative support for different models.

Overall, the frequentist approach can be a useful tool for answering simple research questions and testing hypotheses, but for more complex models and inferences, the Bayesian approach offers a more flexible and informative framework that can incorporate uncertainty and handle more complex models with ease.

Glossary

1. nlpar: non-linear parameter

Draft

Note: the code chunks were deactivated.

  1. Diagnostics using an interactive web tool kit
library(shinystan)
shinystan::launch_shinystan(vdm)

Proceeding to alternative models

MuSyC synergy model

Link to the GitHub code: https://github.com/maomlab/BayesPharma/blob/main/R/model-MuSyC-formula.R

The MuSyC formula

MuSyC_formula <- function(predictors = 1, ...) {
  if (!is.null(predictors)) {
    predictor_eq <- rlang::new_formula(
      lhs = quote(
        logE0 + logC1 + logE1 + h1 + logC2 + logE2 + h2 + logE3 + logalpha),
      rhs = rlang::enexpr(predictors))
  } else {
    predictor_eq <- NULL
  }

  brms::brmsformula(
    response ~ MuSyC(
      logd1 - logd1scale, logd2 - logd2scale,
      logE0, logC1, logE1, h1,
      logC2, logE2, h2,
      logE3, logalpha),
    predictor_eq,
    nl = TRUE,
    ...)
}

Parameters

mp<-plot_synergy_checkerboard(data=dF |> mutate(dose1=10^(log_dose), dose2=as.numeric(dose)))

Note: we need to have separate plots for each drug:

#First for the MZTZ240 using "df"
## Note: Mat took an exp of log_dose, which turns it back into Voltage itself, so I just plugged in Voltage (see the formula above for Mat's initial code, after adjusting in response to an error which I don't remember)
mp<-plot_synergy_checkerboard(data=df |>
      mutate(dose1=as.numeric(Voltage),
             dose2=dose, 
             response=Acquisition), 
      treatment_1_label="Voltage",
      treatment_2_label="MZTZ240 dose")


mp+ ggplot2::coord_cartesian()   
#Second for the RTG using "df3"
mp1<-plot_synergy_checkerboard(data=df3 |>
      mutate(dose1=Voltage,
             dose2=as.numeric(dose)), 
      treatment_1_label="Voltage",
      treatment_2_label="RTG dose")


mp1+ ggplot2::coord_cartesian() 

Here the black parts show that the intervals on the x and y axes do not correspond, i.e., there are larger gaps between levels where there was no acquisition; black/grey simply means no value.

Now the model itself

MuSyC for RTG (df3)

First converting the dose to numeric

#df3: RTG
#Take the mean of Voltage (the logd1scale parameter of the model)

#Converting drug dose to numeric for df3. NOTE: for the previous analysis we needed a factor (I think it was for imaging purposes).
#NOTE: to convert from factor to numeric you have to first convert to character before numeric, otherwise it will assign the level order numbers and not the real values.

df3$dose<-as.character(df3$dose)

str(df3)

#now converting into numeric

df3$dose<-as.numeric(df3$dose)

str(df3)
Musm <- BayesPharma::MuSyC_model(
  data = df3 |> mutate(logd1 = Voltage, logd2 = dose),
  prior = MuSyC_prior(),
  init = MuSyC_init(),
  formula = MuSyC_formula())
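The factor-to-numeric caveat noted in the comments above deserves a tiny standalone demonstration, since `as.numeric()` on a factor silently returns level indices rather than the values:

```r
dose <- factor(c("10", "3", "100"))

as.numeric(dose)                # 1 3 2: the level indices, NOT the doses
as.numeric(as.character(dose))  # 10 3 100: the actual values
```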

MuSyC for MZTZ240

Musm2 <- BayesPharma::MuSyC_model(
  data = df |> mutate(logd1 = Voltage, logd2 = dose),
  prior = MuSyC_prior(),
  init = MuSyC_init(),
  formula = MuSyC_formula())

Model specs

Musm

Interpreting the Musm:

Estimate: The estimated value of the parameter “sigma” is 0.06.

Est.Error: The estimated error or uncertainty associated with the estimated value of “sigma” is 0.00, indicating that the estimate is precise.

l-95% CI: The lower bound of the 95% credible interval for “sigma” is 0.06, meaning that with 95% posterior probability the parameter is not lower than this value.

u-95% CI: The upper bound of the 95% credible interval for “sigma” is 0.07, meaning that with 95% posterior probability the parameter is not higher than this value.

Rhat: The Rhat value of 1.00 indicates good convergence, suggesting that the MCMC chains have converged well and the estimated value of “sigma” is likely to be reliable.

Bulk_ESS: The Bulk_ESS value of 3219 indicates the effective sample size for “sigma” in terms of independent samples, suggesting that there are a sufficient number of independent samples to estimate the parameter reliably.

Tail_ESS: The Tail_ESS value of 2505 also indicates the effective sample size for “sigma” in terms of independent samples, further indicating that the estimates of the parameter are reliable and not overly influenced by autocorrelation in the MCMC samples.

Musm2

Model interpretation

The C parameters are on the dose scale (x axis) of either voltage or drug. As can be seen in the MuSyC model description, the model incorporates the dose with the following manipulation to fit a better model:

logd1 - logd1scale: this is done to center the dose parameters. Consequently, we have to convert back to the un-centered value when we want to interpret the resulting model parameter, i.e., add the mean back to the extracted parameter.

Interpreting the Musm2:

  • In Bayesian statistics, the Rhat value is a measure of convergence of Markov Chain Monte Carlo (MCMC) samples. An Rhat value close to 1.00 indicates that the chains have converged well and that the estimates are reliable. In this case, the Rhat value of 1.00 indicates good convergence, suggesting that the MCMC algorithm has sampled the parameter space sufficiently and the estimated value of the parameter (sigma) is likely to be reliable.

  • Bulk_ESS and Tail_ESS are estimates of the effective sample size, which measures the number of independent samples that provide the same amount of information as the correlated samples generated by the MCMC algorithm. A larger effective sample size indicates a more reliable estimate of the parameter.

  • In the given results, the Bulk_ESS and Tail_ESS values for the sigma parameter are 3062 and 2388, respectively. These values indicate that there are a sufficient number of effective independent samples for this parameter, suggesting that the estimates of the parameter are reliable and not overly influenced by autocorrelation in the MCMC samples.

Extract C parameters: logd1 + logd1scale

Subtracting logd1scale centers the treatment values around zero, and adding it back recovers the original scale (see the documentation here: https://github.com/maomlab/BayesPharma/blob/main/R/model-MuSyC-run.R).

  • We centered the parameters for numeric stability, to make the model easier to fit. So if we want the true treatment dose (whether drug or voltage) we have to re-add the logd1scale and take an exponential of it (for voltage and dose here we don’t need to take the exp since they are already logged).

df3 (RTG) data interpretation

Converting the C parameters

df3logd1scale<-mean(df3$Voltage)

df3logd2scale<-mean(df3$dose)

##Add up to the corresponding C parameters
#extracting the C parameters from the model
Musmpara<-posterior::summarise_draws(Musm)

Resolution to the Musmpara problem

Musmpara<-Musmpara|>data.frame()
C1df3<-df3logd1scale + Musmpara[2,2]
C2df3<-df3logd2scale + Musmpara[5,2]

C1df3
C2df3

df (MZTZ240) data conversion

dflogd1scale<-mean(df$Voltage)

dflogd2scale<-mean(df$dose)

##add up to the corresponding C parameters

#extracting the c parameters from the model
Musm2para<-posterior::summarise_draws(Musm2)



C1df<-dflogd1scale + Musm2para[2,2]
C2df<-dflogd2scale + Musm2para[5,2]

C1df
C2df
#Calculating the logd1scale (Voltage)

dfmean<-df|>
  dplyr::summarize(logd1scale=mean(Voltage), logd2scale=mean(dose))


dfmean

TODO lists

TODO 13/04

  • Need to correct the following error.
  • What do the mini circles represent in fig. 1? As they are symmetric, I perceive them to represent how each of the treatments affects the response in more or less the same way (both antagonists, for instance).
  • Need to imagine Bayes in this context.

Logbook 20 April

  1. The doses of 3 and 5 micromolar were wrongly inserted; this was corrected.
  2. Also, somewhere I had switched the df3 dose with its molar values, and somehow positive ones :)) that is very wrong, and no idea how or why :)))) so that needs to be corrected before the next step.
  3. TODO tomorrow: a. Need to redo and clean the whole code; cleaned and ran until line 392. b. Mat: do we still need to flip the chart given the change? If yes: rerun the rest of the code. If no: delete the “flip the chart” chunk and re-run the rest of the code with the new data.

Logbook 21st April

  1. Need to plot graphs for the MuSyC model parameters.
  2. Get an HTML output to see the overall look (without the output for Stan and also the models).
  3. Wrap up & make the presentation: what do the data show us? For this we need to go back to the theory.

Errors: let’s keep a record of wrong attempts just in case.

Alter the “top” and “bottom” prior values

The top and bottom are flipped, so let’s constrain the range we previously defined for top and bottom:

prior <- BayesPharma::sigmoid_agonist_prior(
  top = brms::prior(normal(0, 20), nlpar = "top"),
  bottom = brms::prior(normal(-50, 20), nlpar = "bottom"))

Let’s pass that into the model:

vdm2<-BayesPharma::sigmoid_agonist_model(data=vdose, prior=prior)

vdm2

DO WE HAVE TO FLIP THE CHART NOW?

We need to flip the chart: looking at the values, they correspond more to an antagonist model than to an agonist one. This is because the main paper uses absolute values (and therefore all positive), whereas here we have the real numbers, which start at -11 and end at -40, so the result is an inverted sigmoid compared to the paper’s graph.

Summary, end of week 5: we looked at the ShinyStan graphs to figure out where the problem was (we don’t have the video). The top and bottom are flipped and do not behave as in the agonist model (looking at vdose we see that a higher dose corresponds to a lower response and vice versa), so it resembles an antagonist. This happened because the paper used absolute numbers while we are using the numbers with their real signs (negatives). To address this we can:

1. use the antagonist model

2. multiply the column by -1 so we have the reversed sign, and redo the model (see above)

Fit this model:

prior <- BayesPharma::sigmoid_agonist_prior(
  top = brms::prior(normal(50, 20), nlpar = "top"),
  bottom = brms::prior(normal(0, 20), nlpar = "bottom"))

Let’s pass that into the model:

vdm2<-BayesPharma::sigmoid_agonist_model(data=vdose, prior=prior)

vdm2

Let’s take the exp of the model parameters here manually.

logC1:

exp(0.94)